An algorithm using divide-and-conquer breaks the problem down into smaller subproblems which are easier to solve. Then it solves these easier problems and combines their results to solve the original problem. A recursive algorithm calls itself until it reaches a trivial case. This case is solved directly, and its result is used to finish the calling function instance, and so on, until the original call can be answered. In this sense a recursive algorithm uses divide-and-conquer. Divide-and-conquer usually means dividing the problem into two roughly equal halves, whereas in recursion the problem is often reduced by only one unit per call. Example: computing the factorial of a number can be done recursively. The problem is divided into subproblems by recursively calculating the factorial of the next smaller number and multiplying it with the original number. This continues until the trivial case is reached, where the number is 1 and the algorithm returns 1.
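A minimal sketch of the factorial example just described (the function name factorial is only illustrative):

def factorial(n):
    '''
    Computes n! recursively: the problem is reduced by one unit
    per call until the trivial case n == 1 is reached.
    '''
    if n <= 1:                      # trivial case
        return 1
    return n * factorial(n - 1)     # n! = n * (n-1)!

print(factorial(5))                 # prints 120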
An algorithm using greedy search tries to solve a problem by choosing at each step the option that looks best at the moment of choice. Therefore it often does not yield the optimal answer but only a local optimum. In simulated annealing we also accept neighbouring, possibly sub-optimal solutions with a certain probability, and this probability decreases as the algorithm progresses. Simulated annealing is better at finding solutions close to the optimum because it does not get stuck in local optima as easily.
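To make the acceptance rule concrete, here is a minimal simulated annealing sketch for a maximisation problem; the score and neighbour functions are placeholders the caller would have to supply, not part of the assignment:

import math
import random

def simulated_annealing(score, neighbour, start, temp=1.0, cooling=0.95, steps=1000):
    '''
    Sketch of simulated annealing: a worse neighbour is still accepted
    with probability exp(-delta / temp); since temp shrinks every step,
    this probability decreases as the algorithm progresses.
    '''
    current = start
    for _ in range(steps):
        candidate = neighbour(current)
        delta = score(current) - score(candidate)   # > 0 means candidate is worse
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        temp *= cooling
    return current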
In [1]:
# The grid we compute on: each cell holds a tuple
# (horizontal weight, vertical weight) of the edges leaving
# that intersection; NaN marks a missing edge at the right
# or bottom border.
import pandas as pd
NaN = float('nan')
manhattan = pd.DataFrame({0: [(3,1), (3,4), (0,4), (3,5), (1,NaN)],
                          1: [(2,0), (2,6), (7,4), (3,6), (3,NaN)],
                          2: [(4,2), (4,5), (3,5), (0,8), (2,NaN)],
                          3: [(0,4), (2,2), (4,2), (2,5), (2,NaN)],
                          4: [(NaN,3), (NaN,1), (NaN,1), (NaN,3), (NaN,NaN)]})
manhattan
Out[1]:
In [2]:
# Two helper functions to get the horizontal or
# vertical distances for a given manhattan.
def get_hdist(pos, manhattan):
    '''
    The function returns the score of the horizontal
    way leaving the given position in a given manhattan.
    '''
    x, y = pos
    assert x < manhattan.shape[0]
    try:
        return manhattan[x][y][0]
    except IndexError:
        return float('nan')

def get_vdist(pos, manhattan):
    '''
    The function returns the score of the vertical
    way leaving the given position in a given manhattan.
    '''
    x, y = pos
    assert y < manhattan.shape[1]
    try:
        return manhattan[x][y][1]
    except IndexError:
        return float('nan')
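A quick sanity check of the helpers on the grid above: the tuple stored at position (0,0) is (3,1), so the horizontal edge leaving it has weight 3 and the vertical edge weight 1.

print(get_hdist((0, 0), manhattan))   # 3
print(get_vdist((0, 0), manhattan))   # 1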
In [3]:
def greedy_manhattan(manhattan):
    '''
    The function implements a greedy algorithm to calculate a
    solution to the Manhattan Tourist Problem. It returns the
    path as a list of tuples and the score of the path.
    '''
    pos = [0, 0]
    end = [manhattan.shape[0]-1, manhattan.shape[1]-1]
    finish = False
    path = [(0, 0)]
    score = 0
    while not finish:
        vdist = get_vdist(pos, manhattan)
        hdist = get_hdist(pos, manhattan)
        if hdist > vdist:
            score += hdist
            pos[0] += 1
        else:  # goes vertical if vdist == hdist
            score += vdist
            pos[1] += 1
        path.append(tuple(pos))
        if pos == end:
            finish = True
    return path, score
In [4]:
# Testing the greedy algorithm and solving Question II a and b
path, score = greedy_manhattan(manhattan)
print('Score achieved by the greedy algorithm: {}'.format(score))
print('The corresponding path is: \n{}\n{}'.format(path[:4],
path[4:]))
In [5]:
def rec_manhattan(pos, manhattan):
    '''
    The function implements a recursive algorithm to calculate
    the optimal solution for the Manhattan Tourist Problem.
    It returns the path as a list of tuples and the score of the path.
    '''
    x, y = pos
    # Trivial case
    if x == 0 and y == 0:
        path = []
        score = 0
        return path, score
    elif x == 0:
        new_pos = (x, y-1)
        score = rec_manhattan((x, y-1), manhattan)[1] + manhattan[x][y-1][1]
        path = rec_manhattan((x, y-1), manhattan)[0] + [new_pos]
        return path, score
    elif y == 0:
        new_pos = (x-1, y)
        score = rec_manhattan((x-1, y), manhattan)[1] + manhattan[x-1][y][0]
        path = rec_manhattan((x-1, y), manhattan)[0] + [new_pos]
        return path, score
    # Recursive part
    else:
        wayX = rec_manhattan((x-1, y), manhattan)[1] + manhattan[x-1][y][0]
        wayY = rec_manhattan((x, y-1), manhattan)[1] + manhattan[x][y-1][1]
        if wayX > wayY:
            maximum = wayX
            new_pos = (x-1, y)
            path = rec_manhattan((x-1, y), manhattan)[0] + [new_pos]
        else:
            maximum = wayY
            new_pos = (x, y-1)
            path = rec_manhattan((x, y-1), manhattan)[0] + [new_pos]
        return path, maximum
In [6]:
# Testing the recursive algorithm and solving Question II a and b
path, score = rec_manhattan((4,4), manhattan)
print('Score achieved by the recursive algorithm: {}'.format(score))
print('The corresponding path is: \n{}\n{}'.format(path[:4],
path[4:]+[(4,4)]))
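The recursive solution above recomputes the same subproblems many times, so its running time grows exponentially with the grid size. As a sketch of how the same divide-and-conquer recursion can be reused more efficiently (not part of the original solution), each position's result can be cached in a dictionary:

def rec_manhattan_memo(pos, manhattan, cache=None):
    '''
    Same recursion as rec_manhattan, but every position is solved
    only once; its (path, score) result is stored in a cache.
    '''
    if cache is None:
        cache = {}
    if pos in cache:
        return cache[pos]
    x, y = pos
    if x == 0 and y == 0:
        result = ([], 0)
    elif x == 0:
        path, score = rec_manhattan_memo((x, y-1), manhattan, cache)
        result = (path + [(x, y-1)], score + manhattan[x][y-1][1])
    elif y == 0:
        path, score = rec_manhattan_memo((x-1, y), manhattan, cache)
        result = (path + [(x-1, y)], score + manhattan[x-1][y][0])
    else:
        pathX, scoreX = rec_manhattan_memo((x-1, y), manhattan, cache)
        pathY, scoreY = rec_manhattan_memo((x, y-1), manhattan, cache)
        wayX = scoreX + manhattan[x-1][y][0]
        wayY = scoreY + manhattan[x][y-1][1]
        if wayX > wayY:
            result = (pathX + [(x-1, y)], wayX)
        else:
            result = (pathY + [(x, y-1)], wayY)
    cache[pos] = result
    return result

Called as rec_manhattan_memo((4, 4), manhattan), it returns the same (path, score) pair as rec_manhattan((4, 4), manhattan), but visits each grid position only once.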
In [7]:
Array = [1,98,27,3,9,8,745,9812,374,18,256,128]
# Simple exchange sort: whenever a smaller element is found to the
# right of position i, it is swapped forward, so after the inner
# loop position i holds the i-th smallest value.
for i in range(0, len(Array)):
    for j in range(i+1, len(Array)):
        if Array[j] < Array[i]:
            Array[i], Array[j] = Array[j], Array[i]
print(Array)
The complexity is in $\mathcal{O}(n^2)$.
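The bound follows from counting the comparisons: for the element at position $i$ the inner loop makes $n-1-i$ comparisons, so in total
\begin{align*} f(n) & = (n-1)+(n-2)+\dots+1 = \frac{n(n-1)}{2} \\ f(n) & \in \mathcal{O}(n^2) \\ \end{align*}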
In [8]:
ArrayA = [1,4,6,8,10,13,17]
ArrayB = [3,8,9,10,33]
ArrayD = []
def merge(a, b):
    '''
    This function merges two sorted lists into one sorted list.
    '''
    done = False
    a = a[:]
    b = b[:]
    c = []
    while not done:
        try:
            if a[0] < b[0]:
                c.append(a.pop(0))
            else:
                c.append(b.pop(0))
        except IndexError:
            # one of the lists is exhausted
            done = True
    c = c + a + b
    return c
print(merge(ArrayA, ArrayB))
print(merge(ArrayB, ArrayA))
print(merge(ArrayD, ArrayA))
print(merge(ArrayA, ArrayD))
In [9]:
Array = [1,98,27,3,9,8,745,9812,374,18,256,128]
def mergeSort(a):
    '''
    This function implements the merge sort algorithm,
    using the merge function defined above.
    '''
    if len(a) <= 1:   # a list of length 0 or 1 is already sorted
        return a
    mid = len(a)//2
    return merge(mergeSort(a[:mid]), mergeSort(a[mid:]))
print(mergeSort(Array))
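The running time of merge sort can be read off its recurrence: each call merges $n$ elements in linear time and halves the problem,
\begin{align*} T(n) & = 2\,T(n/2) + \mathcal{O}(n) \\ T(n) & \in \mathcal{O}(n \log n) \\ \end{align*}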
(a) \begin{align*} f(n) & = n+(n-1)+(n-2)+\dots+1 \\ f(n) & = \sum_{i=1}^n i = \frac{n(n+1)}{2} \\ f(n) & \in \mathcal{O}(n^2)\\ \end{align*}
(b) \begin{align*} f(n) & = n^2 + n\log(n) + 2^n \\ f(n) & \in \mathcal{O}(2^n) \\ \end{align*}